Layer-Wise Adaptive Model Aggregation for Scalable Federated Learning

Authors

Abstract

In Federated Learning (FL), a common approach for aggregating local solutions across clients is periodic full model averaging. It is, however, known that different layers of a neural network can have a different degree of model discrepancy across the clients. The conventional aggregation scheme does not consider such differences and synchronizes the whole model parameters at once, resulting in inefficient network bandwidth consumption: aggregating parameters that are already similar across clients does not make meaningful training progress while still increasing the communication cost. We propose FedLAMA, a layer-wise adaptive model aggregation scheme for scalable FL. FedLAMA adjusts the aggregation interval in a layer-wise manner, jointly considering the model discrepancy and the communication cost. This fine-grained aggregation strategy reduces the communication cost without significantly harming the model accuracy. Our extensive empirical study shows that, as the aggregation interval increases, FedLAMA shows a remarkably smaller accuracy drop than periodic full aggregation, while achieving comparable communication efficiency.
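
The mechanism can be sketched in a few lines of code. The PyTorch snippet below is a minimal, hypothetical illustration rather than FedLAMA's exact criterion: the helper names (layer_discrepancy, choose_intervals, aggregate) and the fixed discrepancy threshold are assumptions. It estimates how far each layer has drifted apart across clients and assigns longer synchronization intervals to layers that are already similar, so those layers are communicated less often.

    import torch

    def layer_discrepancy(client_states, global_state):
        # Average relative L2 distance between each client's copy of a layer
        # and the global layer (one scalar per layer name).
        disc = {}
        for name, g in global_state.items():
            diffs = [torch.norm(c[name].float() - g.float()) for c in client_states]
            disc[name] = torch.stack(diffs).mean() / (torch.norm(g.float()) + 1e-12)
        return disc

    def choose_intervals(disc, base_interval=1, long_interval=8, threshold=0.01):
        # Hypothetical rule: layers with small cross-client discrepancy are
        # synchronized less often.  FedLAMA's actual criterion also weighs the
        # communication cost (size) of each layer.
        return {name: long_interval if d < threshold else base_interval
                for name, d in disc.items()}

    def aggregate(client_states, global_state, intervals, round_idx):
        # Average only the layers whose aggregation interval divides this
        # round; the remaining layers are kept as-is and not communicated.
        new_state = {}
        for name, g in global_state.items():
            if round_idx % intervals[name] == 0:
                new_state[name] = torch.stack(
                    [c[name].float() for c in client_states]).mean(dim=0)
            else:
                new_state[name] = g
        return new_state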

Similar Resources

Practical Secure Aggregation for Federated Learning on User-Held Data

Secure Aggregation protocols allow a collection of mutually distrustful parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves. We consider training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregatio...
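
The core masking idea behind such protocols can be illustrated with a short sketch (this is not the paper's full protocol, which handles dropped-out clients via secret sharing and derives pairwise masks through key agreement rather than shared seeds; all names and the modulus here are illustrative assumptions):

    import random

    MOD = 2 ** 32  # work in a finite group so masked values reveal nothing individually

    def masked_update(client_id, value, num_clients, session_seed=0):
        # Each pair of clients derives the same pairwise mask; the lower-indexed
        # client adds it and the higher-indexed client subtracts it, so every
        # mask cancels once the server sums all contributions.
        masked = value % MOD
        for other in range(num_clients):
            if other == client_id:
                continue
            lo, hi = min(client_id, other), max(client_id, other)
            mask = random.Random(f"{session_seed}:{lo}:{hi}").randrange(MOD)
            masked = (masked + mask) % MOD if client_id == lo else (masked - mask) % MOD
        return masked

    # The server only sees masked values, yet recovers the exact sum.
    values = [3, 5, 7, 11]
    total = sum(masked_update(i, v, len(values)) for i, v in enumerate(values)) % MOD
    assert total == sum(values) % MOD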

Layer-wise learning of deep generative models

When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret autoencoders in this setting as generative models, by showing tha...
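
As background for the layer-wise idea, the sketch below shows the generic greedy layer-wise pretraining pattern on which such procedures build: each layer is trained as a small autoencoder on the output of the frozen layers beneath it. This is a generic illustration only; the paper's best-latent-marginal criterion is not implemented, and all function names are assumptions.

    import torch
    import torch.nn as nn

    def greedy_layerwise_pretrain(data, layer_sizes, epochs=20, lr=1e-3):
        # Train one layer at a time as a small autoencoder on the representation
        # produced by the already-trained (frozen) layers below it.
        trained, inputs = [], data
        for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
            encoder, decoder = nn.Linear(in_dim, out_dim), nn.Linear(out_dim, in_dim)
            opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=lr)
            for _ in range(epochs):
                recon = decoder(torch.relu(encoder(inputs)))
                loss = nn.functional.mse_loss(recon, inputs)
                opt.zero_grad()
                loss.backward()
                opt.step()
            trained.append(encoder)
            inputs = torch.relu(encoder(inputs)).detach()  # freeze, move up one level
        return nn.Sequential(*(nn.Sequential(e, nn.ReLU()) for e in trained))

    # Example: pretrain a 784-256-64 stack on random data standing in for real inputs.
    model = greedy_layerwise_pretrain(torch.rand(128, 784), [784, 256, 64])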

Deep Learning Layer-wise Learning of Feature Hierarchies

Hierarchical neural networks for object recognition have a long history. In recent years, novel methods for incrementally learning a hierarchy of features from unlabeled inputs were proposed as a good starting point for supervised training. These deep learning methods, together with the advances of parallel computers, made it possible to successfully attack problems that were not practical before,...

A Three-Layer Model for Schema Management in Federated Databases

This paper describes our use of object technology to provide a framework for interoperability between databases. We are particularly interested in controlling the effects on the federation of schema modification in local databases. We describe two informal models for federated database design. The abstract model describes the different metadata objects in the federation and how they relate to eac...

Model-based Bayesian Reinforcement Learning with Adaptive State Aggregation

Model-based Bayesian reinforcement learning provides an elegant way of incorporating model uncertainty for trading off between exploration and exploitation. We propose an extension of model-based Bayesian RL to continuous state spaces. The key feature of our approach is its search through the space of model structures, thus adapting not only the model parameters but also the structure itself to ...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i7.26023